161 research outputs found
Maximum Resilience of Artificial Neural Networks
The deployment of Artificial Neural Networks (ANNs) in safety-critical
applications poses a number of new verification and certification challenges.
In particular, for ANN-enabled self-driving vehicles it is important to
establish properties about the resilience of ANNs to noisy or even maliciously
manipulated sensory input. We address these challenges by defining
resilience properties of ANN-based classifiers as the maximal amount of input
or sensor perturbation that is still tolerated. This problem of computing
maximal perturbation bounds for ANNs is then reduced to solving mixed integer
optimization problems (MIP). A number of MIP encoding heuristics are developed
that drastically reduce MIP-solver runtimes, and in our experiments
parallelizing the MIP solvers yields an almost linear speed-up in the number
of computing cores, up to a certain limit. We demonstrate the effectiveness
and scalability of our approach by means of computing maximal resilience bounds
for a number of ANN benchmark sets ranging from typical image recognition
scenarios to the autonomous maneuvering of robots.
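The resilience notion in this abstract, the largest perturbation bound under which a classifier keeps its decision, can be made concrete at toy scale. The sketch below is purely illustrative: the network weights, the input, and the grid search are invented stand-ins for the paper's exact MIP-based computation.

```python
# Illustrative sketch only: the paper computes maximal perturbation bounds
# exactly via mixed integer programming; here a coarse grid search
# approximates the same quantity for a tiny, made-up ReLU classifier.

def relu(xs):
    return [max(0.0, v) for v in xs]

def classify(x):
    # hypothetical 2-2-2 network (weights are invented for the example)
    W1 = [[1.0, -1.0], [0.5, 1.0]]
    W2 = [[1.0, 0.0], [-1.0, 1.0]]
    h = relu([sum(w * v for w, v in zip(row, x)) for row in W1])
    s = [sum(w * v for w, v in zip(row, h)) for row in W2]
    return max(range(len(s)), key=lambda i: s[i])

def resilience_lower_bound(x, steps=200, eps_max=2.0, grid=21):
    """Largest tested eps such that every sampled perturbation with
    infinity-norm <= eps leaves the label of x unchanged."""
    label = classify(x)
    best = 0.0
    for k in range(1, steps + 1):
        eps = eps_max * k / steps
        offsets = [eps * (2 * i / (grid - 1) - 1) for i in range(grid)]
        if all(classify([x[0] + dx, x[1] + dy]) == label
               for dx in offsets for dy in offsets):
            best = eps
        else:
            break
    return best
```

A MIP encoding replaces this sampling with exact big-M constraints over the ReLU activations, which is what makes the bound maximal rather than a sampled lower bound.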
Proof Relevant Corecursive Resolution
Resolution lies at the foundation of both logic programming and type class
context reduction in functional languages. Terminating derivations by
resolution have well-defined inductive meaning, whereas some non-terminating
derivations can be understood coinductively. Cycle detection is a popular
method to capture a small subset of such derivations. We show that in fact
cycle detection is a restricted form of coinductive proof, in which the atomic
formula forming the cycle plays the role of coinductive hypothesis.
This paper introduces a heuristic method for obtaining richer coinductive
hypotheses in the form of Horn formulas. Our approach subsumes cycle detection
and gives coinductive meaning to a larger class of derivations. For this
purpose we extend resolution with Horn formula resolvents and corecursive
evidence generation. We illustrate our method on non-terminating type class
resolution problems.
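Cycle detection as a restricted coinductive proof can be illustrated with a toy propositional resolution loop in the style of type class resolution; the clause names below are invented. A goal that reappears on its own derivation path is closed by using itself as the coinductive hypothesis.

```python
# Toy resolution with cycle detection: clauses are (head, [body atoms]).
# When a goal reappears on its derivation path, it is accepted, playing
# the role of the coinductive hypothesis described in the abstract.

def derive(goal, clauses, path=()):
    if goal in path:          # cycle: the goal is its own coinductive hypothesis
        return True
    for head, body in clauses:
        if head == goal and all(derive(b, clauses, path + (goal,)) for b in body):
            return True
    return False

clauses = [
    # Eq [Int] <= Eq Int, Eq [Int]  -- a self-referential instance
    ("eq_list_int", ["eq_int", "eq_list_int"]),
    ("eq_int", []),
]
```

The paper's contribution goes further: instead of a single repeated atom, it forms richer coinductive hypotheses as Horn formulas, which this sketch does not model.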
On Optimization Modulo Theories, MaxSMT and Sorting Networks
Optimization Modulo Theories (OMT) is an extension of SMT which allows for
finding models that optimize given objectives. (Partial weighted) MaxSMT --or
equivalently OMT with Pseudo-Boolean objective functions, OMT+PB-- is a
highly relevant strict subcase of OMT. We classify existing approaches for MaxSMT
or OMT+PB into two groups: MaxSAT-based approaches exploit the efficiency of
state-of-the-art MaxSAT solvers, but they are special-purpose and not always
applicable; OMT-based approaches are general-purpose, but they suffer from
intrinsic inefficiencies on MaxSMT/OMT+PB problems.
We identify a major source of such inefficiencies, and we address it by
enhancing OMT by means of bidirectional sorting networks. We implemented this
idea on top of the OptiMathSAT OMT solver. We run an extensive empirical
evaluation on a variety of problems, comparing MaxSAT-based and OMT-based
techniques, with and without sorting networks, implemented on top of
OptiMathSAT and {\nu}Z. The results support the effectiveness of this idea, and
provide interesting insights about the different approaches.
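The role a sorting network plays in a Pseudo-Boolean encoding can be sketched in miniature: a comparator network sorts the Boolean indicators of soft constraints, after which "at least k are true" is read off a single output position. The pure-Python model below uses a plain odd-even transposition network on concrete bits (a simpler stand-in for the paper's bidirectional sorting networks); an OMT solver such as OptiMathSAT would instead emit the comparators as clauses over literals.

```python
# A comparator on bits: max (OR) on top, min (AND) below.
def comparator(a, b):
    return (a | b, a & b)

def sorting_network(bits):
    """Odd-even transposition network; sorts bits descending (ones first)."""
    bits = list(bits)
    n = len(bits)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            bits[i], bits[i + 1] = comparator(bits[i], bits[i + 1])
    return bits

def at_least(k, bits):
    # After sorting, "at least k true" is just the k-th output (1 <= k <= len)
    return sorting_network(bits)[k - 1] == 1
```

Encoded as clauses, the sorted outputs give the solver a unary counter over the soft-constraint indicators, which is the structural handle the abstract's enhancement exploits.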
Quadratic Word Equations with Length Constraints, Counter Systems, and Presburger Arithmetic with Divisibility
Word equations are a crucial element in the theoretical foundation of
constraint solving over strings, which have received a lot of attention in
recent years. A word equation relates two words over string variables and
constants. Its solution amounts to a function mapping variables to constant
strings that equate the left and right hand sides of the equation. While the
problem of solving word equations is decidable, the decidability of the problem
of solving a word equation with a length constraint (i.e., a constraint
relating the lengths of words in the word equation) has remained a
long-standing open problem. In this paper, we focus on the subclass of
quadratic word equations, i.e., in which each variable occurs at most twice. We
first show that the length abstractions of solutions to quadratic word
equations are in general not Presburger-definable. We then describe a class of
counter systems with Presburger transition relations which capture the length
abstraction of a quadratic word equation with regular constraints. We provide
an encoding of the effect of a simple loop of the counter systems in the theory
of existential Presburger Arithmetic with divisibility (PAD). Since PAD is
decidable, we obtain a decision procedure for quadratic word equations with
length constraints for which the associated counter system is \emph{flat}
(i.e., all nodes belong to at most one cycle). We show a decidability result
(in fact, also an NP algorithm with a PAD oracle) for a recently proposed
NP-complete fragment of word equations called regular-oriented word equations,
together with length constraints. Decidability holds when the constraints are
additionally extended with regular constraints with a 1-weak control structure.
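The objects involved can be made concrete with a bounded brute-force check, which is of course no substitute for the paper's route through counter systems and PAD. The variable name, alphabet, and bounds below are illustrative; the example equation X.a = a.X is quadratic (X occurs twice), and under the length constraint "|X| even" its bounded solutions are exactly even blocks of a's.

```python
# Brute-force enumeration of bounded solutions of a word equation with a
# single variable 'X', filtered by a length constraint on |X|.
from itertools import product

def solutions(lhs, rhs, length_pred, alphabet="ab", max_len=6):
    """Assignments to 'X' making lhs == rhs after substitution,
    such that length_pred(|X|) holds."""
    found = []
    for n in range(max_len + 1):
        for cs in product(alphabet, repeat=n):
            x = "".join(cs)
            if lhs.replace("X", x) == rhs.replace("X", x) and length_pred(len(x)):
                found.append(x)
    return found
```

For X.a = a.X the length abstraction of the solution set is all naturals, and intersecting with the Presburger constraint "n even" keeps every second one, matching the enumeration below.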
Correct composition of dephased behavioural models
This research is supported by EPSRC grant EP/M014290/1. Scenarios of execution are commonly used to specify partial behaviour and interactions between different objects and components in a system. To avoid overall inconsistency in specifications, various automated methods have emerged in the literature to compose (behavioural) models. In recent work, we have shown how the theorem prover Isabelle can be combined with the constraint solver Z3 to efficiently detect inconsistencies in two or more behavioural models and, in their absence, generate the composition. Here, we extend our approach further and show how to generate the correct composition (as a set of valid traces) of dephased models. This work has been inspired by a problem from a medical domain where different care pathways (for chronic conditions) may be applied to the same patient with different starting points.
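A minimal trace-set sketch of dephased composition, with invented event names: dephasing model B by d steps pads its traces with explicit idle steps, and the composition keeps the traces both models accept. The paper derives such compositions with Isabelle and Z3; here "agreement" is plain trace equality, an assumption made for illustration.

```python
# Behavioural models as sets of event traces (tuples). Model B starts
# d steps later than model A, which we encode by padding with 'idle'.

IDLE = "idle"

def dephase(model, d):
    """Shift every trace of a model by d explicit idle steps."""
    return {(IDLE,) * d + t for t in model}

def compose(model_a, model_b, d):
    """Valid traces of A composed with B started d steps later."""
    return model_a & dephase(model_b, d)
```

In the care-pathway setting of the abstract, A and B would be two pathways applied to the same patient with different starting points, and the composition is the set of schedules consistent with both.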
Subsumption Demodulation in First-Order Theorem Proving
Motivated by applications of first-order theorem proving to software
analysis, we introduce a new inference rule, called subsumption demodulation,
to improve support for reasoning with conditional equalities in
superposition-based theorem proving. We show that subsumption demodulation is a
simplification rule that does not require radical changes to the underlying
superposition calculus. We implemented subsumption demodulation in the theorem
prover Vampire, by extending Vampire with a new clause index and adapting its
multi-literal matching component. Our experiments, using the TPTP and SMT-LIB
repositories, show that subsumption demodulation in Vampire can solve many new
problems that so far could not be solved by state-of-the-art reasoners.
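A ground-level caricature of the rule may help fix intuitions; real subsumption demodulation in Vampire works modulo substitutions and term orderings, and uses the clause index mentioned above, none of which this toy models. A conditional equality C \/ l = r may rewrite l to r in a clause D once every condition literal of C also occurs in D.

```python
# Ground-only toy: literals are plain strings, a clause is a list of them.
# cond_clause encodes C \/ l = r as (condition literals, (l, r)).

def subsumption_demodulate(d_clause, cond_clause):
    conds, (l, r) = cond_clause
    if not all(c in d_clause for c in conds):
        return d_clause          # conditions not subsumed: rule inapplicable
    if not any(l in lit for lit in d_clause):
        return d_clause          # nothing to rewrite
    return [lit.replace(l, r) for lit in d_clause]
```

Because the result is a simplification (the rewritten clause replaces D), no radical change to the superposition loop is needed, which is the point the abstract emphasizes.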
Checking Properties Described by State Machines: On Synergy of Instrumentation, Slicing, and Symbolic Execution
We introduce a novel technique for checking properties described by finite state machines. The technique is based on a synergy of three well-known methods: instrumentation, program slicing, and symbolic execution. More precisely, we instrument a given program with code that tracks runs of state machines representing various properties. Next, we slice the program to reduce its size without affecting runs of the state machines. And then we symbolically execute the sliced program to find real violations of the checked properties, i.e., real bugs. Depending on the kind of symbolic execution, the technique can be applied as a stand-alone bug-finding technique, or to weed out some false positives from the output of another bug-finding tool. We provide several examples demonstrating the practical applicability of our technique.
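The instrumentation step can be sketched as a property state machine driven by program events; the lock discipline and event names below are illustrative, and a real implementation would inject the monitor calls into the program before slicing and symbolically executing it.

```python
# A property FSM for "a lock must alternate acquire/release".
# Unknown events leave the state unchanged; reaching 'error' is a violation.

ERROR = "error"

TRANSITIONS = {
    ("unlocked", "acquire"): "locked",
    ("locked", "release"): "unlocked",
    ("locked", "acquire"): ERROR,      # double acquire
    ("unlocked", "release"): ERROR,    # release without acquire
}

def monitor(trace, state="unlocked"):
    for event in trace:
        state = TRANSITIONS.get((state, event), state)
        if state == ERROR:
            return ERROR
    return state
```

Slicing then removes code that cannot affect the monitor's state, and symbolic execution searches for a path driving it into the error state, i.e., a real bug.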
Speeding up the constraint-based method in difference logic
"The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-40970-2_18"Over the years the constraint-based method has been successfully applied to a wide range of problems in program analysis, from invariant generation to termination and non-termination proving. Quite often the semantics of the program under study as well as the properties to be generated belong to difference logic, i.e., the fragment of linear arithmetic where atoms are inequalities of the form u v = k. However, so far constraint-based techniques have not exploited this fact: in general, Farkas’ Lemma is used to produce the constraints over template unknowns, which leads to non-linear SMT problems. Based on classical results of graph theory, in this paper we propose new encodings for generating these constraints when program semantics and templates belong to difference logic. Thanks to this approach, instead of a heavyweight non-linear arithmetic solver, a much cheaper SMT solver for difference logic or linear integer arithmetic can be employed for solving the resulting constraints. We present encouraging experimental results that show the high impact of the proposed techniques on the performance of the VeryMax verification systemPeer ReviewedPostprint (author's final draft